10 research outputs found

    Neuromorphic computing using non-volatile memory

    Dense crossbar arrays of non-volatile memory (NVM) devices represent one possible path for implementing massively-parallel and highly energy-efficient neuromorphic computing systems. We first review recent advances in the application of NVM devices to three computing paradigms: spiking neural networks (SNNs), deep neural networks (DNNs), and ‘Memcomputing’. In SNNs, NVM synaptic connections are updated by a local learning rule such as spike-timing-dependent plasticity, a computational approach directly inspired by biology. For DNNs, NVM arrays can represent matrices of synaptic weights, implementing the matrix–vector multiplication needed for algorithms such as backpropagation in an analog yet massively-parallel fashion. This approach could provide significant improvements in power and speed compared to GPU-based DNN training, for applications of commercial significance. We then survey recent research in which different types of NVM devices – including phase-change memory, conductive-bridging RAM, filamentary and non-filamentary RRAM, and other NVMs – have been proposed, either as a synapse or as a neuron, for use within a neuromorphic computing application. The relevant virtues and limitations of these devices are assessed in terms of properties such as conductance dynamic range, (non)linearity and (a)symmetry of conductance response, retention, endurance, required switching power, and device variability.
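    The analog matrix–vector multiplication this abstract describes can be sketched in a few lines of NumPy. This is a toy illustration, not the authors' implementation; all conductance and voltage values are made up:

```python
import numpy as np

# Hypothetical conductance matrix for a tiny crossbar: each entry is a
# synaptic weight stored as a device conductance (arbitrary units).
G = np.array([[0.2, 0.8, 0.5],
              [0.7, 0.1, 0.9]])

# Input voltages applied to the crossbar columns (one per input neuron).
v = np.array([1.0, 0.5, -0.3])

# Ohm's law plus Kirchhoff's current law: the current summed on each row
# wire is the dot product of that row's conductances with the column
# voltages, so the array computes y = G @ v in one analog step.
y = G @ v
print(y)  # row currents, one per output neuron
```

    The point of the sketch is that the whole multiply-accumulate happens in parallel at the array, rather than by streaming weights to a processor.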

    Unsupervised Learning Using Phase-Change Synapses and Complementary Patterns

    Neuromorphic systems using memristive devices provide a brain-inspired alternative to the classical von Neumann processor architecture. In this work, a spiking neural network (SNN) implemented using phase-change synapses is studied. The network is equipped with a winner-take-all (WTA) mechanism and a spike-timing-dependent synaptic plasticity rule realized using the crystal-growth dynamics of phase-change memristors. We explore various configurations of the synapse implementation and demonstrate the capabilities of the phase-change-based SNN as a pattern classifier using unsupervised learning. Furthermore, we enhance the performance of the SNN by introducing an input encoding scheme that encodes information from both the original and the complementary pattern. Simulation and experimental results of the phase-change-based SNN demonstrate the learning accuracies achieved on the MNIST handwritten-digit benchmark.
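    The complementary-pattern idea can be sketched as follows. This is one natural reading of the encoding (concatenating each input with its bitwise complement); the exact scheme in the paper may differ:

```python
import numpy as np

def encode_with_complement(pattern):
    """Concatenate a binary input pattern with its complement, so every
    presentation carries information from both the 'on' and 'off' pixels
    and the total number of active inputs is constant.
    Illustrative sketch only; not taken from the paper's code."""
    pattern = np.asarray(pattern)
    return np.concatenate([pattern, 1 - pattern])

x = np.array([1, 0, 0, 1])
print(encode_with_complement(x))  # [1 0 0 1 0 1 1 0]
```

    A fixed number of active inputs per presentation helps WTA-style networks, since every pattern then injects the same total input current.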

    Large-scale neural networks implemented with nonvolatile memory as the synaptic weight element: comparative performance analysis (accuracy, speed, and power)

    We review our work towards achieving competitive performance (classification accuracies) for on-chip machine learning (ML) of large-scale artificial neural networks (ANNs) using Non-Volatile Memory (NVM)-based synapses, despite the inherent random and deterministic imperfections of such devices. We then show that such systems could potentially offer faster (up to 25x) and lower-power (60x–2000x) ML training than GPU-based hardware.

    PCM for Neuromorphic Applications: Impact of Device Characteristics on Neural Network Performance

    The impact of Phase Change Memory (PCM) as well as other Non-Volatile Memory (NVM) device characteristics on the quantitative classification performance of artificial neural networks is studied. Our results show that any NVM-based neural network — not just those based on PCM — can be expected to be highly resilient to random effects (device variability, yield, and stochasticity), but highly sensitive to “gradient” effects that act to steer all synaptic weights in the same direction. Asymmetry, such as that found with PCM, can be mitigated by an occasional RESET strategy, which can be both infrequent and inaccurate. Algorithms that can finesse some of the imperfections of NVM devices are also proposed.
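    The occasional-RESET strategy can be sketched for a single synapse built from two unidirectional (SET-only) devices, as in PCM. All constants are illustrative, not measurements from the paper:

```python
GMAX = 1.0   # saturation conductance (arbitrary units, invented)
STEP = 0.05  # conductance increment per SET pulse (invented)

class DiffSynapse:
    """Signed weight stored as w = g_plus - g_minus, where both devices
    can only be incremented (as with PCM crystal growth). An occasional
    RESET erases both devices and re-programs only the net difference;
    the paper notes this can be infrequent and need not be precise."""
    def __init__(self):
        self.g_plus = 0.0
        self.g_minus = 0.0

    @property
    def weight(self):
        return self.g_plus - self.g_minus

    def update(self, sign):
        # Potentiate or depress by incrementing one of the two devices.
        if sign > 0:
            self.g_plus = min(self.g_plus + STEP, GMAX)
        else:
            self.g_minus = min(self.g_minus + STEP, GMAX)
        # Occasional RESET: once either device saturates, transfer the
        # surviving difference back onto a freshly erased pair.
        if self.g_plus >= GMAX or self.g_minus >= GMAX:
            w = self.weight
            self.g_plus = max(w, 0.0)
            self.g_minus = max(-w, 0.0)

s = DiffSynapse()
for _ in range(30):
    s.update(+1)   # 30 potentiating pulses (clips at GMAX)
for _ in range(10):
    s.update(-1)   # 10 depressing pulses
print(round(s.weight, 2))
```

    Without the RESET step, a saturated `g_plus` would make the synapse permanently unable to depress; the transfer restores headroom on both devices.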

    Bidirectional Non-Filamentary RRAM as an Analog Neuromorphic Synapse, Part I: Al/Mo/Pr<sub>0.7</sub>Ca<sub>0.3</sub>MnO<sub>3</sub> Material Improvements and Device Measurements

    We report on material improvements to non-filamentary RRAM devices based on Pr0.7Ca0.3MnO3 by introducing an MoOx buffer layer together with a reactive Al electrode, and on device measurements designed to help gauge the performance of these devices as bidirectional analog synapses for on-chip acceleration of the backpropagation algorithm. Previous Al/PCMO devices exhibited degraded LRS retention due to the low activation energy for oxidation of the Al electrode, and Mo/PCMO devices showed low conductance contrast. To control the redox reaction at the metal/PCMO interface, we introduce a 4-nm interfacial layer of conducting MoOx as an oxygen buffer layer. Due to the controlled redox reaction within this Al/Mo/PCMO device, we observed improvements in both retention and conductance on/off ratio. We confirm bidirectional analog synapse characteristics and measure “jump-tables” suitable for large-scale neural network simulations that attempt to capture complex and stochastic device behavior [see companion paper]. Finally, switching energy measurements are shown, illustrating a path for future device research toward smaller devices, shorter pulses, and lower programming voltages.
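    A measured jump-table can drive a simple stochastic device model in simulation: given the current conductance, the change produced by one programming pulse is sampled from the measured distribution. A minimal sketch with an entirely invented table (real tables come from device measurement, per pulse polarity):

```python
import random

# Invented jump-table: maps a conductance bin to the set of conductance
# changes observed for one SET pulse at that operating point.
jump_table = {
    0: [0.10, 0.12, 0.08],   # low conductance: larger jumps
    1: [0.05, 0.06, 0.04],   # mid conductance: moderate jumps
    2: [0.01, 0.02, 0.00],   # near saturation: tiny jumps
}

def apply_pulse(g, rng):
    """Sample the conductance change for one pulse from the jump-table."""
    bin_index = min(int(g * 3), 2)   # map g in [0, 1) to a table bin
    return g + rng.choice(jump_table[bin_index])

rng = random.Random(0)
g = 0.0
for _ in range(10):
    g = apply_pulse(g, rng)
print(round(g, 3))
```

    Binning by current conductance is what lets the simulation reproduce the nonlinear, state-dependent response that the paper measures.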

    Accelerating Machine Learning with Non-Volatile Memory: exploring device and circuit tradeoffs

    Large arrays of the same nonvolatile memories (NVM) being developed for Storage-Class Memory (SCM), such as Phase Change Memory (PCM) and Resistance RAM (ReRAM), can also be used in non-von Neumann neuromorphic computational schemes, with device conductance serving as the synaptic "weight." This allows the all-important multiply-accumulate operation within these algorithms to be performed efficiently at the location of the weight data. In contrast to other groups working on Spike-Timing Dependent Plasticity (STDP), we have been exploring the use of NVM and other inherently-analog devices for Artificial Neural Networks (ANN) trained with the backpropagation algorithm. We recently showed a large-scale (165,000 two-PCM synapses) hardware-software demo (IEDM 2014, [1], [2]) and analyzed the potential speed and power advantages over GPU-based training (IEDM 2015, [3]). In this paper, we extend this work in several useful directions. We assess the impact of undesired, time-varying conductance change, including drift in PCM and leakage of analog CMOS capacitors. We investigate the use of non-filamentary, bidirectional ReRAM devices based on PrCaMnO, with an eye to developing material variants that provide suitably linear conductance change. Finally, we explore tradeoffs in designing peripheral circuitry, balancing simplicity and area-efficiency against the impact on ANN performance.
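    The time-varying conductance change mentioned here, drift in PCM, is commonly modeled as a power law in time, G(t) = G0 * (t / t0) ** (-nu). A small sketch with illustrative parameter values (not measurements from this work; the drift exponent nu is device- and state-dependent):

```python
def drifted_conductance(g0, t, t0=1.0, nu=0.05):
    """Power-law drift model for PCM conductance.

    g0: conductance measured at reference time t0 after programming.
    nu: drift exponent (illustrative value; real devices vary)."""
    return g0 * (t / t0) ** (-nu)

g0 = 10.0  # conductance (uS) measured at t0 = 1 s after programming
for t in (1.0, 10.0, 100.0, 1000.0):
    print(t, round(drifted_conductance(g0, t), 3))
```

    Because every decade of time costs roughly the same fractional conductance loss, stored weights decay slowly but steadily, which is why drift matters for analog weight storage even when retention (in the digital sense) is excellent.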

    Experimental Demonstration and Tolerancing of a Large-Scale Neural Network (165 000 Synapses) Using Phase-Change Memory as the Synaptic Weight Element

    Using two phase-change memory devices per synapse, a three-layer perceptron network with 164 885 synapses is trained on a subset (5000 examples) of the MNIST database of handwritten digits using a backpropagation variant suitable for nonvolatile memory (NVM) + selector crossbar arrays, obtaining a training (generalization) accuracy of 82.2% (82.9%). Using a neural network simulator matched to the experimental demonstrator, extensive tolerancing is performed with respect to NVM variability, yield, and the stochasticity, linearity, and asymmetry of the NVM-conductance response. We show that a bidirectional NVM with a symmetric, linear conductance response of high dynamic range is capable of delivering the same high classification accuracies on this problem as a conventional, software-based implementation of this same network.
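    The two-devices-per-synapse scheme encodes a signed weight as the difference of two unipolar conductances. A minimal NumPy sketch of reading such an array (conductance values are made up for illustration):

```python
import numpy as np

# Two conductance arrays per weight matrix: w = G_plus - G_minus lets a
# pair of unipolar devices represent signed weights. Values invented.
G_plus = np.array([[0.9, 0.1],
                   [0.4, 0.6]])
G_minus = np.array([[0.2, 0.5],
                    [0.4, 0.1]])

W = G_plus - G_minus        # effective signed weight matrix
x = np.array([1.0, -1.0])   # input activations
y = W @ x                   # forward-pass read of the synapse array
print(y)
```

    In hardware the subtraction is done by differencing the currents of the two device columns, so the read remains a single parallel operation.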